Supporting Irregular and Dynamic Computations in Data Parallel Languages
Authors
Abstract
Data-parallel languages support a single instruction flow; the parallelism is expressed at the instruction level. In practice, data-parallel languages have chosen arrays to support the parallelism. This regular data structure allows a natural development of regular parallel algorithms. The implementation of irregular algorithms requires a programming effort to project the irregular data structures onto regular structures. In this article we present the different techniques used to manage irregularity in data-parallel languages. Each of them is illustrated with standard or experimental data-parallel language constructions.

1 Irregularity and data-parallelism

First observe that the data-parallel and task-parallel programming models are derived directly from the SIMD and MIMD execution models. The first trace of data parallelism is seen in the first supercomputers, such as the Cray 1 or the Cyber 205, which provided a pipelined parallel execution model. Access to contiguous or regularly spaced memory slices supported matrix and vector algorithms that are very CPU-time intensive. The following generations of pipelined machines provided instructions to access sparse vectors in memory ("gather/scatter" operations). These new instructions introduced irregular data structure handling. Many applications using these types of structures (sparse matrices, Monte Carlo...) then benefited from the power of these supercomputers while preserving the vector programming model.

With the emergence of parallel and massively parallel machines, where the memory is physically distributed over a large number of processors, new parallel programming techniques appeared. In order to preserve the existing knowledge base, a programming model similar to the vector model was proposed: the data-parallel model. Here vectors or matrices are distributed across all the processors, and parallel operations are processed simultaneously by the processors. Here again, regular structure handling, involving only regular communication patterns between processors, rapidly benefited from the growing power of these new machines. On the other hand, the use of irregular structures leads to irregular general communications; their implementation requires a particular effort from the programmer, who must ensure a balance between processing and communication time through a correct distribution of the data.
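To make the "gather/scatter" operations mentioned above concrete, the following C++ fragment sketches the access pattern: an index vector selects irregularly spaced elements of an array so that the arithmetic that follows can stay regular. It is only an illustration under assumed names (gather, scatter, the index-vector representation), not a construct of any of the languages surveyed here.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the "gather" access pattern: pack irregularly spaced elements
// of a dense array into a contiguous vector, driven by an index vector.
std::vector<double> gather(const std::vector<double>& dense,
                           const std::vector<std::size_t>& idx) {
    std::vector<double> packed(idx.size());
    for (std::size_t i = 0; i < idx.size(); ++i)
        packed[i] = dense[idx[i]];   // packed[i] = dense[idx[i]]
    return packed;
}

// Sketch of the "scatter" access pattern: write a contiguous vector back
// to irregularly spaced positions of the dense array.
void scatter(std::vector<double>& dense,
             const std::vector<std::size_t>& idx,
             const std::vector<double>& packed) {
    for (std::size_t i = 0; i < idx.size(); ++i)
        dense[idx[i]] = packed[i];   // dense[idx[i]] = packed[i]
}
```

On a vector or data-parallel machine, both loops would be executed as single gather or scatter operations rather than element by element.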
Similar resources
Irregular Data-parallel Objects in C++
Most data-parallel languages use arrays to support parallelism. This regular data structure allows a natural development of regular parallel algorithms. The implementation of irregular algorithms requires a programming effort to project the irregular data structures onto regular structures. We first propose in this paper a classification of existing data-parallel languages. We briefly describe thei...
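As an illustration of what "projecting an irregular data structure onto regular structures" can look like, the sketch below stores a sparse matrix in the common compressed sparse row (CSR) layout, i.e. as three flat arrays that a data-parallel language can distribute like any other arrays. The type and function names are assumptions made for the example, not constructs of the C++ library described in that paper.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// An irregular object (a sparse matrix) projected onto three regular arrays,
// in the compressed sparse row (CSR) layout.
struct CsrMatrix {
    std::vector<double>      values;    // non-zero coefficients, row by row
    std::vector<std::size_t> col_index; // column of each non-zero
    std::vector<std::size_t> row_start; // n_rows + 1 entries: row i occupies
                                        // [row_start[i], row_start[i+1])
};

// y = A * x, written as loops over the flat arrays only.
std::vector<double> spmv(const CsrMatrix& a, const std::vector<double>& x) {
    assert(!a.row_start.empty());
    std::vector<double> y(a.row_start.size() - 1, 0.0);
    for (std::size_t i = 0; i + 1 < a.row_start.size(); ++i)
        for (std::size_t k = a.row_start[i]; k < a.row_start[i + 1]; ++k)
            y[i] += a.values[k] * x[a.col_index[k]];  // gather on x
    return y;
}
```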
The Illinois Concert System: Programming Support for Irregular Parallel Applications
Irregular applications are critical to supporting grand challenge applications on massively parallel machines and extending the utility of those machines beyond the scientific computing domain. The dominant parallel programming models, data parallel and explicit message passing, provide little support for programming irregular applications. We articulate a set of requirements for supporting irreg...
Programming Models, Compilers, and Algorithms for Irregular Data-Parallel Computations
Advances in parallel computing have made it clear that the ability to express computations in a machine-independent manner and the ability to handle dynamic and irregular computations are two necessary features of future programming systems. In this paper, we describe the nested data-parallel model of programming, which has both these capabilities. We present an intermediate-level language call...
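The nested data-parallel model mentioned here is usually implemented by flattening nested (ragged) sequences into a flat data vector plus a segment descriptor, so that irregular collections can be processed with regular passes. The sketch below illustrates that representation and a segmented sum; the names are hypothetical and the fragment is not taken from the paper.

```cpp
#include <cstddef>
#include <vector>

// A nested sequence such as [[3,1],[7],[2,2,2]] flattened into one data
// vector plus a vector of segment lengths.
struct SegmentedVector {
    std::vector<double>      data;    // all elements, concatenated
    std::vector<std::size_t> lengths; // one entry per inner sequence
};

// Segmented sum: one result per inner sequence, computed with a single
// regular pass over the flat data vector.
std::vector<double> segmented_sum(const SegmentedVector& s) {
    std::vector<double> sums(s.lengths.size(), 0.0);
    std::size_t pos = 0;
    for (std::size_t seg = 0; seg < s.lengths.size(); ++seg)
        for (std::size_t k = 0; k < s.lengths[seg]; ++k)
            sums[seg] += s.data[pos++];
    return sums;
}
```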
Parallelizing irregular and pointer-based computations automatically: Perspectives from logic and constraint programming
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task...
Architectural Support and Mechanisms for Object Caching in Dynamic Multithreaded Computations
High-level parallel programming models supporting dynamic fine-grained threads in a global object space are becoming increasingly popular for expressing irregular applications based on sophisticated adaptive algorithms and pointer-based data structures. However, implementing these multithreaded computations on scalable parallel machines poses significant challenges, particularly with respect to...
Journal title:
Volume / Issue:
Pages: -
Publication date: 1996